Common Property and Logistics

The Common Property and Logistics Department is responsible for all tasks related to the operation and maintenance of buildings: in addition to the 4,500 m² of the CPPM site, there is the ANTARES/KM3NeT experiment control station located in La Seyne-sur-Mer.

The Common Property and Logistics Department supervises all modification or maintenance work on buildings. It works in collaboration with the technical services of the CNRS and the University to integrate the CPPM into multi-laboratory contracts.

In addition, the Common Property and Logistics Department provides support for events outside the laboratory, such as the Science Villages or conferences requiring the transport and assembly of equipment.

Computing infrastructures

  • General computing, whose missions are the installation and monitoring of the hardware and software used by all (220 PCs, 230 servers, 300 laptops). The team also manages DEC, an HPC computing cluster with 1,560 cores and 15 TB of memory.
  • The installation of a computing node connected to the computing grids and of an OpenStack Cloud.
View of the CPPM computer room grid node © Camille Moirenc

Head of the department: Adrien Rivière

Operations team:

Computing grid and Cloud team:

Infrastructure:

  • Microsoft OS installation and maintenance: clients (Windows 10) and servers (Windows Server 2019)
  • Centralized antivirus and application maintenance for Windows workstations
  • Application server management under Windows Server
  • Installation and maintenance of systems under Linux and macOS
  • Network infrastructure

  • Grid computing, Cloud, Virtualization
  • Distributed data storage and systems
  • Infrastructure and equipment monitoring

2022:

  • Installation of Windows application servers
  • Replacement of the central storage server

2021:

  • Update to Windows 10

2020:

  • Update of the Grid/Cloud network to 100 Gbps
  • Complete reorganisation of user support due to Covid-19
  • Replacement of the WiFi appliance with Alcasar
  • General deployment of IP telephony
  • Installation and use of GPU cards
  • Creation of the High-Performance Computing platform

2019:

  • Cloud: Pre-production of a M3AMU cloud node (400 cores, 200 TB)
  • Update of the internal network to 10 Gbps

2018:

  • Grid: Installation of ARC-CE on CentOS 7 in the grid infrastructure, replacing Torque/Maui on SL6
  • Operations: Migration of Windows services to Windows Server 2016